Virtual reality in ergonomics by wearable devices: experiences from the automotive sector

Chiara Carnazzo (Wellbeing and Health & Safety – Ergonomics, Stellantis Enlarged Europe, Torino, Italy)
Stefania Spada (Wellbeing and Health & Safety – Ergonomics, Stellantis Enlarged Europe, Torino, Italy)
Sebastiano Lamacchia (Digital Factory, Competence Industry Manufacturing 4.0, Torino, Italy)
Federico Manuri (Department of Control and Computer Engineering, Politecnico di Torino, Torino, Italy)
Andrea Sanna (Department of Control and Computer Engineering, Politecnico di Torino, Torino, Italy)
Maria Pia Cavatorta (Department of Mechanical and Aerospace Engineering, Politecnico di Torino, Torino, Italy)

Journal of Workplace Learning

ISSN: 1366-5626

Article publication date: 15 August 2024

Issue publication date: 10 October 2024


Abstract

Purpose

Preventive ergonomics is essential to protecting the health and safety of workers, as is recognizing human variability. The purpose of this paper is to describe a Unity-based application designed for three-dimensional postural analysis and visualizations using motion capture data. Integration with virtual reality (VR) technologies allows the user to be immersed in the simulated working environment without the need for a physical prototype. The proposed application aims to facilitate the application of ergonomic principles in workplace design and assessment for a proactive, participatory and inclusive approach to worker well-being.

Design/methodology/approach

The authors developed an application that leverages motion capture techniques and VR technologies to support analysts in the ergonomic assessment of physical prototypes as well as of future workplaces. An innovative postural prediction module helps the analyst understand, from a single data recording, which postures different users are likely to assume when interacting with the workplace.

Findings

The functionalities of the proposed application are illustrated through several case studies, showing how different types of information are made available and can support workplace analysts and designers in an industrial context.

Originality/value

This paper provides insights into the experience and research carried out by an automotive company in the application of wearable sensors and VR to support a proactive and participatory approach to workplace ergonomics.


Citation

Carnazzo, C., Spada, S., Lamacchia, S., Manuri, F., Sanna, A. and Cavatorta, M.P. (2024), "Virtual reality in ergonomics by wearable devices: experiences from the automotive sector", Journal of Workplace Learning, Vol. 36 No. 7, pp. 621-635. https://doi.org/10.1108/JWL-03-2024-0064

Publisher

Emerald Publishing Limited

Copyright © 2024, Chiara Carnazzo, Stefania Spada, Sebastiano Lamacchia, Federico Manuri, Andrea Sanna and Maria Pia Cavatorta.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

The introduction of new information and communication technologies into the factory environment is driving a profound change in the manufacturing world. In the smart factory, a key element of the Industry 4.0 vision put forward by the German Government in 2010 (Zuehlke, 2010), systems become cyber-physical: they interact with each other, monitor and validate physical processes, create a virtual copy of the physical world and make decisions based on complex data analysis (Shi et al., 2020). These concepts are revolutionizing industry in all fields, with the automotive sector among the forerunners and the most involved.

Virtualization and simulation of manufacturing processes generate several advantages in terms of cost and time, and enable the optimization of assembly line design and the study of human–machine interaction. Taking advantage of technological innovations is essential in today's manufacturing environments, which are characterized by an intricate collaboration paradigm encompassing user, machine and system (Di Pardo et al., 2008). Incorporating ergonomic principles during the design phase can mitigate adverse outcomes, such as deployment delays, diminished quality and reduced usability in the final system. It also helps avoid additional costs associated with subsequent modifications needed to enhance or revise the system at a later stage. Various studies and industry trends support the awareness of human factors (Isusi, 2020; Badri et al., 2018; Turner and Oyekan, 2023).

In this context, virtual reality (VR) is a pivotal technology worth leveraging in research, as it may serve as an advanced human–computer interface featuring real-time simulations of the real world and various interactions with digital objects through multiple sensory channels (Burdea and Coiffet, 2003). VR finds applications across various fields such as entertainment, medicine, culture, marketing and education. A particularly noteworthy domain in recent years has been industrial prototyping (da Silva et al., 2020). Until a few years ago, measures taken to address system faults and accidents primarily focused on reducing risk factors. However, advances in three-dimensional (3D) design software and virtual simulators now empower designers to integrate VR technologies into the design process and to test and validate tools in a simulated environment before entering the physical prototyping phase (Dias Barkokebas and Li, 2023).

In a virtual environment, the interaction between the human and the workplace can be simulated using digital human models (virtual mannequins), while the workplace and workstation can be imported using CAD models. Mannequins are virtual representations of human figures that adhere to natural proportions and that can be controlled and animated via direct or inverse kinematics (IK) (Castellone et al., 2017). Virtual mannequins have several application scopes, many in line with the principles of workplace learning for individuals, groups and teams: consider the opportunity of learning the assembly sequence of specific mechanical parts in immersive virtual reality, providing training for operators when the physical prototype has not yet been built (Caputo et al., 2019).

The aim of this paper is to present a Unity-based application for workplace design and assessment that exploits Inertial Measurement Unit (IMU) sensors for motion data recording. Integration with VR technologies allows the user to be immersed in the simulated working environment without the need for a physical prototype. The application is designed to support analysts in assessing workplace design through wearable devices but can also be used for on-the-job training. An innovative feature of the presented application is the postural prediction for users belonging to an anthropometric percentile different from that of the individual on whom the motion capture data were recorded. Recognizing human variability is fundamental to ergonomics: the reachability needs of smaller users and the body space needs of larger users must be met, while ensuring postural comfort for all. On the other hand, organizing several testing sessions to include users of different heights is complex and costly.

The article is organized as follows. A background section explores the need for automated ergonomic assessments based on sensors and virtual reality technologies; it is followed by a concise description of the Unity-based application designed by the authors to support analysts in assessing workers' postures and body movements during a work task. Postural prediction for users belonging to anthropometric percentiles other than that of the individual on whom the motion capture data were recorded is then introduced, and some application cases are presented. Finally, conclusions and future work are addressed.

2. Background

A major challenge for ergonomics, in a world where both the average age of workers and the demand for greater productivity are steadily increasing, is to improve employee well-being by designing a work environment that can preserve musculoskeletal health. Musculoskeletal disorders (MSDs) affect numerous workers throughout Europe with injuries and disorders of the muscles, nerves, tendons, joints and spinal discs (Isusi, 2020). These health problems range from pain to more serious medical conditions that can result in lost work days. For this reason, addressing workplace design issues takes on a double significance: improving workers' well-being and enhancing productivity (Beevis, 2003; Oxenburgh, 2010).

Risk factors for MSDs include repetitive movements, awkward postures and the handling of loads. MSDs are among the most common work-related ailments and represent about 45% of occupational diseases in Europe. Throughout Europe, MSDs affect millions of workers and cost employers billions of euros (Isusi, 2020). Occupational diseases and hazards can also serve as a more general indicator of the health status and working conditions of a population; each country's national laws define how working conditions are to be evaluated (Buckle and Devereux, 2002). Thus, the interest of companies in establishing effective and efficient workplace design methodologies is evident. Launis et al. (1996) define the workstation design process as “the activity which leads to the birth of the workplace”. One of the main issues to take into account when designing a workplace is adaptability to changes in production; flexibility is essential to cope with the market's needs. The concept of the human-centered workplace has emerged to address this issue, outlining the necessity of including ergonomic principles in the workplace design process (Giacomin, 2014; Caputo et al., 2018; Caputo et al., 2019; Turner and Oyekan, 2023).

Several ergonomic assessment methods are observational and focus primarily on postural analysis, such as the Ovako Working Posture Analyzing System, the Rapid Upper Limb Assessment and the Rapid Entire Body Assessment (Paudel et al., 2022; Deros et al., 2011; Tangcuangco and Nacion, 2019; Marín and Marín, 2021). Typically, these methods are used to evaluate existing work environments in a reactive approach to ergonomics. The high cost of physical prototypes greatly reduces the possibility of a proactive approach to ergonomic issues, the goal of which is precisely to discover potential problems before they emerge in the workplace.

VR technologies are emerging as pivotal tools in proactive ergonomics, as they can help workplace designers overcome this limitation, enabling developers to visualize all the elements of a workstation in an immersive environment at a 1:1 scale. Moreover, VR solutions may be profitable when combined with motion capture (MoCap) technologies (Jayaram et al., 2006; Di Pardo et al., 2008; Battini et al., 2018) to display workers' movements and postures. Menolotto et al. (2020) present a systematic literature review of different MoCap technologies and of the issues related to data management and processing.

The general benefits of combining ergonomics principles, VR and MoCap solutions for the human-centered design of workplaces can be summarized as follows (Whitman et al., 2004):

  • replicability of experiments and simulations;

  • possibility of recording and processing experimental data;

  • flexibility in creating environments that can respond to market needs while respecting existing standards; and

  • possibility of carrying out ergonomic assessments in real time.

The advantages provided by integrated VR–MoCap systems with respect to traditional computer-aided solutions are investigated in several papers (Pontonnier et al., 2014; Peruzzini et al., 2017; Vosniakos et al., 2017; Michalos et al., 2018; Caputo et al., 2017; Battini et al., 2018; Carnazzo et al., 2022; Kačerová et al., 2022; Dias Barkokebas and Li, 2023). A systematic literature review can be found in da Silva et al. (2020). Simonetto et al. (2022) propose a methodological framework to enable designers of assembly systems to take into account the different physical strength and joint mobility of workers because of, for example, different ages.

3. The proposed application: system architecture and workflow

This section provides a concise description of the proposed Unity-based application and of the system workflow. The application leverages the integration of MoCap and VR technologies to support analysts in the ergonomic assessment of physical prototypes as well as of future workplaces, through a set of functionalities that includes: performing 3D postural analysis on motion capture data; providing a virtual reality environment in which to assess the interaction between the worker and the workstation while monitoring postural indicators; and estimating body postures and movements for workers of different anthropometric percentiles. These functionalities are briefly described hereafter.

3.1 The MoCap system for three-dimensional postural analysis

Postural comfort is central to workers' well-being and human-centered workplace design. Traditionally, a trained analyst observes the worker and evaluates the joint angles by estimating the projected angles in videos or pictures of the analyzed work activity. Such subjective observations are time-consuming and prone to inter- and intra-observer variability.

In recent years, wearable sensors have demonstrated adequate accuracy for assessing posture and body movements and for conducting quantitative ergonomic evaluations (Padilla et al., 2019; Menolotto et al., 2020). Data can be recorded while workers perform the task at the assembly line or on physical prototypes, providing valuable information on the position of the different joints even during long operations.

Carnazzo et al. (2022) present an algorithm for postural analysis in 3D that collects data directly from IMUs fitted on the worker's body with adjustable straps (Figure 1) and autonomously calculates the angles between body parts through inverse trigonometric functions. The different body angles for the neck, torso, shoulders, elbows, wrists and knees are defined according to the relevant standards EN 1005-4 (UNI, 2009) and ISO 11226 (ISO, 2000), which set acceptable angles and holding times for working postures and constitute the basis for risk assessment methods.
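As a concrete illustration of the inverse-trigonometric approach, the short C# sketch below computes an elbow flexion angle from the orientations of two adjacent body segments. It is a minimal example under stated assumptions, not the authors' code: the choice of the local longitudinal axis and the method names are hypothetical.

    using UnityEngine;

    // Minimal sketch: one joint angle from the orientations of two adjacent
    // segments, each provided by an IMU as a quaternion. The segment's
    // longitudinal axis is assumed to be its local Y axis.
    public static class JointAngles
    {
        public static float ElbowFlexionDeg(Quaternion upperArm, Quaternion forearm)
        {
            Vector3 upperAxis = upperArm * Vector3.up; // upper-arm direction in the world frame
            Vector3 foreAxis = forearm * Vector3.up;   // forearm direction in the world frame

            // Inverse cosine of the dot product gives the angle between the two axes.
            float cos = Mathf.Clamp(Vector3.Dot(upperAxis.normalized, foreAxis.normalized), -1f, 1f);
            return Mathf.Acos(cos) * Mathf.Rad2Deg;
        }
    }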

To facilitate the analyst's viewing and retrieval of information, special attention was paid to creating an intuitive and straightforward user interface. At system startup, the analyst can choose between displaying motion capture data in real time and analyzing prerecorded data, with the possibility of recording the work activity synchronously with both RGB cameras and the IMU motion tracking system. For manual data synchronization, the user wearing the IMU sensors initiates the recording phase with a clap of the hands.
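How the clap is detected is not detailed in the paper; a simple heuristic, sketched below under the assumption that per-frame wrist acceleration magnitudes and the distance between the hands are available, is to look for a simultaneous acceleration spike on both wrists while the hands are close together. All names and thresholds are illustrative.

    // Hypothetical clap detector for marking time zero of a recording.
    public static class ClapSync
    {
        public static bool IsClap(float leftAccelMs2, float rightAccelMs2, float handDistanceM,
                                  float accelThreshold = 30f, float distanceThreshold = 0.1f)
        {
            // A clap shows up as a sharp acceleration spike on both wrist IMUs
            // while the hands are nearly touching.
            return leftAccelMs2 > accelThreshold
                && rightAccelMs2 > accelThreshold
                && handDistanceM < distanceThreshold;
        }
    }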

Figure 2 is an example of the user interface for the analysis of prerecorded data. The analyst can choose from different views and types of mannequin representation: the internal kinematic chain alone, with lines connecting spheres that represent the articular joints (green stick man); a full 3D humanoid figure; or a superimposition of both. The application allows all the angle information displayed in a particular time frame to be saved. Specifically, through a button on the graphical interface, the analyst can save all the values of the postural angles, the video captures and the virtual scene in all display modes. This information is stored in HTML files, potentially including a textual comment entered by the analyst.

Figure 3 shows the graphic user interface for joint angle visualization. A dropdown menu allows the analyst to select the postural angles to be visualized on the screen. Ten different angles can be analyzed, covering the neck, the trunk, the shoulders, the elbows, the wrists and the knees. The identification of these angles is the basis of many risk assessment methods (Marín and Marín, 2021).
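By way of example, a computed angle can be mapped onto the acceptability zones set by the standards cited above. The sketch below uses the static trunk-flexion bands associated with ISO 11226 and EN 1005-4 (up to 20° acceptable, 20° to 60° acceptable under conditions, beyond 60° not acceptable); the conditions on holding time, frequency and trunk support must be taken from the standards themselves.

    // Mapping a trunk flexion angle onto the standard acceptability zones.
    public static class PostureZones
    {
        public enum Zone { Acceptable, ConditionallyAcceptable, NotAcceptable }

        public static Zone ClassifyTrunkFlexion(float angleDeg)
        {
            if (angleDeg <= 20f) return Zone.Acceptable;
            if (angleDeg <= 60f) return Zone.ConditionallyAcceptable; // subject to time/support conditions
            return Zone.NotAcceptable;
        }
    }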

3.2 Integration of the MoCap system with immersive virtual reality technologies

Wearable sensors for motion capture are a tool of great potential in research and automated ergonomic assessment. However, physical prototypes are costly and can take a significant amount of time to build. As discussed in the background section, an interesting development in virtualization is the combination of motion capture techniques with VR technologies. The visual feedback provided by the head-mounted display makes it possible to record the user's posture and movements while he or she interacts with a virtual environment. Thanks to VR technologies, postural indicators and risk assessments become possible in the absence of a physical prototype of the workstation.

Figure 4 depicts a scheme of the proposed software and hardware architecture. Postural data are recorded through the Xsens Awinda motion capture system, which includes 17 wireless IMU sensors fitted on the user's body with adjustable straps, and forwarded to the Unity application by the Xsens MVN software. The Xsens software development kit is used to properly manage the data within the Unity application. The capture system provides the Unity application with accurate 3D orientations of the sensors with respect to an earth-referenced local frame (Paulich et al., 2018). Unity was chosen as the world's leading game engine for creating and deploying immersive experiences across multiple platforms and devices, and the proposed algorithm for ergonomic analysis was scripted in C#.
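The Unity side of this data flow can be pictured with the sketch below. The Xsens MVN/SDK specifics are omitted; the sketch simply assumes that an upstream layer delivers earth-referenced segment orientations as quaternions, and the segment names, the OnSegmentData entry point and the bone mapping are assumptions made for illustration.

    using System.Collections.Generic;
    using UnityEngine;

    // Sketch: applying streamed segment orientations to the mannequin's bones.
    public class MocapAvatar : MonoBehaviour
    {
        // Avatar bones, assigned in the Unity Inspector.
        [SerializeField] private Transform pelvis, torso, head;

        // Latest earth-referenced orientation per segment, filled by the
        // (omitted) network/SDK layer.
        private readonly Dictionary<string, Quaternion> latest = new Dictionary<string, Quaternion>();

        public void OnSegmentData(string segment, Quaternion orientation) => latest[segment] = orientation;

        private void Update()
        {
            if (latest.TryGetValue("Pelvis", out var q)) pelvis.rotation = q;
            if (latest.TryGetValue("T8", out q)) torso.rotation = q;
            if (latest.TryGetValue("Head", out q)) head.rotation = q;
            // ... the remaining segments of the 17-sensor suit are handled the same way
        }
    }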

The evaluation process may or may not require a headset, depending on whether the simulation is performed virtually or on-site. Virtual reality simulations use a head-mounted display (HMD), in particular the HTC VIVE. By wearing the HMD, with its motion tracking sensors, users can see 3D stereoscopic images and establish their spatial position in the virtual world. In some applications, the headset was combined with the Leap Motion controller, an optical hand-tracking system that allows users to manipulate digital objects with hand movements. Figure 5 shows, on the left, the user equipped with the Xsens Awinda system, the HTC VIVE HMD and the Leap Motion controller and, on the right, his representation in the virtual environment. The mannequin (which will be referred to as the Xsens mannequin in the following) replicates the anthropometric dimensions of the user involved in the recording session, and its motions and postures are driven by the data recorded by the motion capture system.

3.3 Postural prediction with different anthropometric percentiles

Body posture and postural angles depend on the worker's body size. However, recognizing human variability is key to ergonomics. Motion capture and VR technologies can help predict how users of different sizes would interact with a given workstation in the earliest stages of workstation validation and ensure that the workstation meets the safety and comfort requirements of the majority of users. In the automotive field, as in many others, it is common to refer to the 5th, 50th and 95th percentiles (namely, P5, P50 and P95) for males and females, with the intent of accommodating the majority of the working population. However, repeating data recording sessions with users of different body sizes is time-consuming and expensive. It is, therefore, of great interest for companies to be able to use the data recorded on a single user to estimate the postures that a user of a different height is likely to assume when interacting with the same work environment.

Evaluating in advance the postural risk for workers of different heights is important to ensure postural comfort for all users as well as to address the reachability needs of smaller users and the body space issues of larger users. The posture of the digital mannequin at any given time can be predicted using an IK engine that, in combination with postural rules defined by experienced ergonomists, can predict the movements of the digital human model (Castellone et al., 2017). The idea of IK originated in the field of robotics, in particular to solve the problem of positioning a robotic arm in a specific, predefined position. The kinematic chain is an arrangement of rigid components connected by joints, as is the human skeletal system, and provides constrained movements. IK solvers may find multiple solutions to mannequin posture problems based on joint constraints, or may find no solution if the target point is not reachable.
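The essence of the IK problem, including the possible absence or multiplicity of solutions, is easiest to see on a two-bone limb. The following sketch is an illustration of the principle, not the full-body solver used in the application: it computes the elbow angle that places the wrist on a target and reports unreachable targets.

    using UnityEngine;

    // Analytic IK for a two-bone chain (upper arm + forearm) via the law of cosines.
    public static class TwoBoneIK
    {
        // Returns the elbow flexion angle (degrees, 0 = straight arm) needed to
        // place the wrist on the target, or null when the target is unreachable.
        public static float? ElbowAngleFor(Vector3 shoulder, Vector3 target,
                                           float upperLen, float foreLen)
        {
            float d = Vector3.Distance(shoulder, target);

            // No solution: target farther than the extended limb or closer
            // than the fully folded one.
            if (d > upperLen + foreLen || d < Mathf.Abs(upperLen - foreLen)) return null;

            // Interior elbow angle from the law of cosines.
            float cosElbow = (upperLen * upperLen + foreLen * foreLen - d * d)
                             / (2f * upperLen * foreLen);
            float interior = Mathf.Acos(Mathf.Clamp(cosElbow, -1f, 1f)) * Mathf.Rad2Deg;

            // The elbow's swivel around the shoulder-target axis remains free:
            // even this tiny chain admits infinitely many poses for one target.
            return 180f - interior;
        }
    }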

Figure 6 depicts the general functioning of the proposed application for postural prediction. For the system to work properly, a preliminary motion capture session is required. Data recorded by the IMU sensors are exported to Unity, where the user's movements are replicated. A direct mode of operation allows the analyst to directly control the effectors of the hands, feet, hips, shoulders (both left and right) and pelvis of the mannequin, in case the analyst wishes to correct certain postures or some data are missing. The postural prediction algorithm is developed using Final IK, the leading IK library for Unity, which is widely adopted in the production of video games. This IK plugin for Unity provides a variety of modules: the FullBodyBiped IK has been used for animating the human avatar, whereas the LookAt IK component has been used to rotate the mannequin's head toward the target point. These modules have been extended with additional ergonomic postural rules based on the Human Model (Castellone et al., 2017), which predicts the most likely posture that a worker assumes given the position of the feet and the working points of the hands. The benefit of this system is the possibility of comparing the predicted postures of mannequins belonging to different percentiles and, thus, of validating a workstation for a variety of users, taking into account postural comfort for all users, the reachability needs of the P5 mannequin and the body space issues of the P95.
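In code, driving the two Final IK modules named above looks roughly like the sketch below. Component and effector names follow Final IK's public API, but the script itself is an assumption for illustration; the authors' wrapper and their ergonomic rule set are not shown.

    using RootMotion.FinalIK;
    using UnityEngine;

    // Sketch: pinning a hand effector to a working point and orienting the gaze.
    public class PosturePredictor : MonoBehaviour
    {
        [SerializeField] private FullBodyBipedIK bodyIK; // animates the avatar
        [SerializeField] private LookAtIK lookAtIK;      // turns the head toward the work point

        public void ReachWith(Transform workingPoint)
        {
            // Pin the right hand to the working point of the task.
            bodyIK.solver.rightHandEffector.position = workingPoint.position;
            bodyIK.solver.rightHandEffector.positionWeight = 1f;

            // Rotate the head so the mannequin looks at the same point.
            lookAtIK.solver.IKPosition = workingPoint.position;
            lookAtIK.solver.IKPositionWeight = 1f;
        }
    }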

4. Some application cases

The proposed Unity-based application integrating MoCap and VR technologies has been applied to a variety of use cases, also in order to assess its validity and usability for analysts.

The task reproduced in Figure 7 is the roof rack assembly, in which the worker must place the roof rack on the roof of the car and secure it with screws and brackets. This is a very interesting task to analyze from the point of view of both ergonomic risk assessment and operator training. Assembling elements on the car roof may require the worker to assume awkward postures with the hands above shoulder height. In addition, there may be problems in reaching the inner points of the car roof. Thus, it is a task worth analyzing beforehand in the virtual environment with respect to workers of different heights, as well as to the possibility of adding a platform that elevates the worker and improves accessibility.

It is also worth noting that the roof rack assembly requires a sequence of operations to be performed in a specific order, which the operator must learn to repeat and which requires good dexterity. In this respect, the VR environment offers the possibility of on-the-job training, immersing the worker in the environment in which the work task is to be carried out and allowing him or her not only to practice but also to provide feedback on how the workplace and the work task have been designed.

In a second step, the HTC VIVE HMD was integrated with the Leap Motion controllers to better simulate the action of grasping objects and, thus, the hand–object interaction (Figure 5). Motion capture recording sessions were conducted with users representing different heights, namely, P5, P50 and P95 female and male percentiles, to highlight critical issues in the prediction system.

Figure 8 reproduces the virtual workstation used for the simulations in Unity, while Figure 9 shows the user interface for the analysis of the recorded data. The mannequin on the right is the Xsens mannequin, which replicates the worker's posture using the information obtained from the motion capture session, while the mannequin on the left is posed by the postural prediction system (and will be referred to as the Final IK mannequin in the following). The menu allows the analyst to change the size of the mannequin by direct input of anthropometric data or by selecting a target percentile for the postural prediction; if no percentile is selected, the system defaults to the P50 male mannequin. The menu also allows the analyst to change the camera orientation to ensure a correct view of the two mannequins. By importing the digital twin of the work environment into the Unity scene, the analyst can visualize how the mannequin moves in the scene, and the system can detect potential collisions with objects and adjust the mannequin's posture accordingly.
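A minimal sketch of the percentile selection step follows, assuming uniform scaling of the mannequin by the stature ratio between the target percentile and the recorded user; the stature values are placeholders, not the anthropometric tables used by the authors.

    using System.Collections.Generic;
    using UnityEngine;

    // Sketch: resizing the Final IK mannequin to a target stature percentile.
    public class PercentileSelector : MonoBehaviour
    {
        // Illustrative statures in cm (placeholder values, not real tables).
        private static readonly Dictionary<string, float> StatureCm = new Dictionary<string, float>
        {
            { "P5F", 151f }, { "P50F", 162f }, { "P95F", 172f },
            { "P5M", 163f }, { "P50M", 175f }, { "P95M", 187f },
        };

        [SerializeField] private Transform mannequinRoot;
        [SerializeField] private float recordedUserStatureCm = 175f; // MoCap subject

        public void SetPercentile(string key = "P50M") // P50 male is the application's default
        {
            float ratio = StatureCm[key] / recordedUserStatureCm;
            mannequinRoot.localScale = Vector3.one * ratio; // uniform scaling as a first approximation
        }
    }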

A significant challenge for any IK solver is reproducing realistic postures. The IK engine available in Unity is designed for gaming and may return a pose that does not correctly take into account ergonomic principles at work. Typical examples are working points close to the ground, whose reaching may otherwise involve significant forward bending of the back instead of bending the knees (Figure 10a), and visual demands related to the work to be performed, which are often overlooked by IK engines (Figure 10b). To overcome these limitations, additional postural rules were considered, based on the experience of the company's professional ergonomists.
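One such rule could be encoded as in the sketch below (an illustration only; the authors' rule set is not published here): for working points below a height threshold, a knee-bend strategy is preferred over deep forward bending of the back. The threshold is a hypothetical parameter an ergonomist would tune.

    // Sketch: choosing a reaching strategy for low working points.
    public static class PosturalRules
    {
        public enum Strategy { Standing, BentKnees }

        public static Strategy StrategyFor(float targetHeightM, float kneeBendThresholdM = 0.6f)
        {
            // Below the threshold, bend the knees instead of the back.
            return targetHeightM < kneeBendThresholdM ? Strategy.BentKnees : Strategy.Standing;
        }
    }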

These additional rules improve the effectiveness of the postural prediction module on different percentiles, as assessed by comparing the joint angles returned by the posture prediction module with those returned by the motion capture system. A more focused analysis was carried out on the most relevant postures occurring during the task. For each selected posture, the different joint angles were compared.

Figure 11 depicts, on the left, the posture assumed by the P5 male mannequin, which reproduces the anthropometry of the user on whom the motion capture recording was made and, on the right, the posture predicted for the P95 male. The postural prediction module returns a realistic posture even if the recording was made on a different percentile: the system recognizes that taller users need to bend their back forward to reach the same working point and to bend their neck to reduce the viewing distance. The predicted posture for the P95 reproduced in Figure 11 was determined by the posture strategy encoded in the implemented ergonomic rules.

Another aspect of interest for companies is how workers of different heights interact with obstacles that may be present in the workstation; a possible case is, for instance, working on the car underbody. As no experimental test included a collision between the worker and an item of the workstation, the validation of the obstacle avoidance system was accomplished by providing the target coordinates. In this preliminary evaluation phase, the system proved capable of detecting collisions with objects, and the posture strategies adopted by the mannequin to handle the interaction with an obstacle were plausible (Figure 12).
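In Unity, such a collision test can be approximated as in the sketch below, where the mannequin's trunk is modeled as a capsule and tested against the colliders of the imported workstation model; the trunk approximation and the default radius are assumptions for illustration.

    using UnityEngine;

    // Sketch: detecting whether a predicted posture penetrates the workstation.
    public static class ObstacleCheck
    {
        public static bool TrunkCollides(Vector3 hips, Vector3 shoulders, float trunkRadius = 0.18f)
        {
            // Any overlapping collider means the posture must be adjusted.
            return Physics.OverlapCapsule(hips, shoulders, trunkRadius).Length > 0;
        }
    }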

5. Conclusions and future work

The paper describes a Unity-based application that leverages wearable sensors and virtual reality technologies and shows how it can facilitate the application of ergonomic principles at the early stages of design in a proactive approach to ergonomics. The application supports analysts in workplace design and assessment and in the application of risk assessment tools. Joint angles are calculated in accordance with the relevant technical standards and can be used by the analyst as input data in risk assessment tools, eliminating the inter- and intra-observer variability often associated with subjective observations. An innovative postural prediction module helps the analyst predict likely body postures for users belonging to different anthropometric percentiles as they interact with a given workstation, be it a physical prototype or a simulated environment in VR.

Recognizing human variability is fundamental to ergonomics, as is ensuring postural comfort for all users together with the reachability needs of smaller users and the body space needs of larger users. Future work will focus on enhancing the realism of the predicted postures for a variety of users from a single data recording, especially in particular cases such as the presence of obstacles in the workplace. Workers may adopt several strategies to perform a task, and indeed the main complexity of any postural prediction system lies precisely in the indeterminacy of the IK problem.

To resolve the indeterminacy of the IK problem, it is possible to add constraints in the form of postural rules, which may concern the nature of the task and the organization of the workstation. These postural rules can initially be defined based on the experience of professional ergonomists but can be improved through machine learning techniques as the amount of data grows.

The implementation of postural rules based on good ergonomics should be the basis of any supporting tool for the prevention of work-related musculoskeletal disorders during the design and evaluation of the workstation, but also for on-the-job training, employee engagement and performance in the work environment.

Wearable sensors and VR are key tools for a proactive approach to ergonomics, as they are for on-the-job training in immersive virtual environments. Available VR systems ensure immersive and interactive environments; a sense of touch and force in response to the user's actions may also be provided through haptic devices. The level of complexity, both logical and graphical, that can be achieved today in the realization of these virtual environments makes it possible to extend the use of VR beyond the assessment of postural load. The assessment of mental workload is becoming increasingly important, and the use of VR in cognitive load research is as relevant as ever.

The great potential of these technologies for workplace learning involves analysts and designers as well as workers. Workplace analysts and designers can explore different scenarios in the virtual environment following a “what-if” approach and may be supported in their design choices and assessments from the early phases of design. They can also obtain important feedback from the workers on workplace design without the need for physical prototypes and ensure a truly participatory approach to design. The involvement of the different stakeholders is fundamental in ergonomics and necessary to ensure a productive and inclusive workplace.

Figures

Figure 1. Inertial measurement unit sensors worn by the user while executing a car underbody assembly task in the lab

Figure 2. RGB camera videos of the assembly task (on the left) and the three-dimensional virtual mannequin with joint angles (on the right)

Figure 3. Graphic user interface for joint angle visualization

Figure 4. The proposed architecture

Figure 5. The user equipped with inertial measurement unit sensors and virtual reality technologies (on the left) and his virtual representation (on the right)

Figure 6. Descriptive diagram of the general operation of the system for posture prediction

Figure 7. (a) Capture of the scene where the user assembles components in a virtual environment and (b) representative scene in virtual reality

Figure 8. Digital twin of the car assembly line: virtual workstation used for simulations in Unity

Figure 9. The postural prediction module. On the right, the Xsens mannequin driven by the MoCap session; on the left, the Final IK mannequin posed by the postural prediction system, whose stature percentile can be selected by the analyst

Figure 10. Examples of additional postural rules that help predict realistic postures in a work environment

Figure 11. P5 male recording on the left and P95 male prediction on the right

Figure 12. An example of an obstacle avoidance pose

References

Badri, A., Boudreau-Trudel, B. and Souissi, A.S. (2018), “Occupational health and safety in the industry 4.0 era: a cause for major concern?”, Safety Science, Vol. 109, pp. 403-411.

Battini, D., Calzavara, M., Persona, A., Sgarbossa, F., Visentin, V. and Zennaro, I. (2018), “Integrating mocap system and immersive reality for efficient human-centred workstation design”, IFAC-PapersOnLine, Vol. 51 No. 11, pp. 188-193.

Beevis, D. (2003), “Ergonomics—costs and benefits revisited”, Applied Ergonomics, Vol. 34 No. 5, pp. 491-496.

Buckle, P.W. and Devereux, J.J. (2002), “The nature of work-related neck and upper limb musculoskeletal disorders”, Applied Ergonomics, Vol. 33 No. 3, pp. 207-217.

Burdea, G.C. and Coiffet, P. (2003), Virtual Reality Technology, Wiley, NJ.

Caputo, F., Greco, A., D’Amato, E., Notaro, I. and Spada, S. (2018), “On the use of virtual reality for a human-centered workplace design”, Procedia Structural Integrity, Vol. 8, pp. 297-308.

Caputo, F., Greco, A., D’Amato, E., Notaro, I. and Spada, S. (2017), “A preventive ergonomic approach based on virtual and immersive reality”, Proceedings of the International Conference on Applied Human Factors and Ergonomics, pp. 3-15.

Caputo, F., Greco, A., Fera, M. and Macchiaroli, R. (2019), “Digital twins to enhance the integration of ergonomics in the workplace design”, International Journal of Industrial Ergonomics, Vol. 71, pp. 20-31.

Carnazzo, C., Spada, S., Lamacchia, S., Manuri, F., Sanna, A. and Cavatorta, M.P. (2022), “Real-time data analysis and 3D representation for postural assessment in manufacturing processes”, Proceedings of SIE2022 Conference Proceedings, pp. 124-132.

Castellone, R., Spada, S., Caiazzo, G. and Cavatorta, M.P. (2017), “Assessment of anthropometric differences in the design of workstations: case studies of an automotive assembly line”, International Journal of Applied Engineering Research, Vol. 12 No. 14, pp. 4549-4555.

da Silva, A.G., Winkler, I., Gomes, M.M. and Pinto, U.D.M. (2020), “Ergonomic analysis supported by virtual reality: a systematic literature review”, Proceedings of the IEEE 22nd Symposium on Virtual and Augmented Reality, pp. 463-468.

Deros, B.M., Khamis, N.K., Ismail, A.R., Jamaluddin, H., et al. (2011), “An ergonomics study on assembly line workstation design”, American Journal of Applied Sciences, Vol. 8 No. 11, pp. 1195-1201.

Di Pardo, M., Riccio, A., Sessa, F., Naddeo, A. and Talamo, L. (2008), “Methodology development for ergonomic analysis of work-cells in virtual environment”, SAE Technical Paper No. 2008-01-1481.

Dias Barkokebas, R. and Li, X. (2023), “VR-RET: a virtual reality–based approach for real-time ergonomics training on industrialized construction tasks”, Journal of Construction Engineering and Management, Vol. 149 No. 10, pp. 4023098-1-4023098-18.

Giacomin, J. (2014), “What is human centred design?”, The Design Journal, Vol. 17 No. 4, pp. 606-623.

ISO (2000), “Ergonomics – evaluation of static working postures”, ISO 11226.

Isusi, I. (2020), “Work-related musculoskeletal disorders—facts and figures (synthesis of 10 national reports)”, European Agency for Safety and Health at Work, Publications Office of the European Union, Luxembourg.

Jayaram, U., Jayaram, S., Shaikh, I., Kim, Y. and Palmer, C. (2006), “Introducing quantitative analysis methods into virtual environments for real-time and continuous ergonomic evaluations”, Computers in Industry, Vol. 57 No. 3, pp. 283-296.

Kačerová, I., Kubr, J., Hořejší, P. and Kleinová, J. (2022), “Ergonomic design of a workplace using virtual reality and a motion capture suit”, Applied Sciences, Vol. 12 No. 4, p. 2150.

Launis, M., Vuori, M. and Lehtelä, J. (1996), “Who is the workplace designer?–Towards a collaborative mode of action”, International Journal of Industrial Ergonomics, Vol. 17 No. 4, pp. 331-341.

Marín, J. and Marín, J.J. (2021), “Forces: a motion capture-based ergonomic method for the today’s world”, Sensors, Vol. 21 No. 15, pp. 1-30.

Menolotto, M., Komaris, D.-S., Tedesco, S., O’Flynn, B. and Walsh, M. (2020), “Motion capture technology in industrial applications: a systematic review”, Sensors, Vol. 20 No. 19, p. 5687.

Michalos, G., Karvouniari, A., Dimitropoulos, N., Togias, T. and Makris, S. (2018), “Workplace analysis and design using virtual reality techniques”, CIRP Annals, Vol. 67 No. 1, pp. 141-144.

Oxenburgh, M.S. (2010), “Cost-benefit analysis of ergonomics programs”, American Industrial Hygiene Association Journal, Vol. 58 No. 2, pp. 150-156.

Padilla, B.D., Glushkova, A., Menychtas, D. and Manitsaris, S. (2019), “Designing a web based automatic ergonomic assessment using motion data”, Proceedings of PETRA 2019, the 12th Pervasive Technologies Related to Assistive Environments Conference, HAL Open Science, pp. 528-534.

Paudel, P., Kwon, Y.-J., Kim, D.-H. and Choi, K.-H. (2022), “Industrial ergonomics risk analysis based on 3D-human pose estimation”, Electronics, Vol. 11 No. 20, pp. 1-17.

Paulich, M., Schepers, M., Rudigkeit, N. and Bellusci, G. (2018), “Xsens MTw Awinda: miniature wireless inertial-magnetic motion tracker for highly accurate 3D kinematic applications”, Xsens: Enschede, The Netherlands, pp. 1-9.

Peruzzini, M., Carassai, S. and Pellicciari, M. (2017), “The benefits of human-centred design in industrial practices: re-design of workstations in pipe industry”, Procedia Manufacturing, Vol. 11, pp. 1247-1254.

Pontonnier, C., Dumont, G., Samani, A., Madeleine, P. and Badawi, M. (2014), “Designing and evaluating a workstation in real and virtual environment: toward virtual reality based ergonomic design sessions”, Journal on Multimodal User Interfaces, Vol. 8 No. 2, pp. 199-208.

Shi, Z., Xie, Y., Xue, W., Chen, Y., Fu, L. and Xu, X. (2020), “Smart factory in industry 4.0”, Systems Research and Behavioral Science, Vol. 37 No. 4, pp. 607-617.

Simonetto, M., Arena, S. and Peron, M. (2022), “A methodological framework to integrate motion capture system and virtual reality for assembly system 4.0 workplace design”, Safety Science, Vol. 146, p. 105561.

Tangcuangco, A.L. and Nacion, A.L.C. (2019), “Utilization of participatory ergonomics for workstation evaluation towards productive manufacturing”, International Journal of Engineering and Advanced Technology, Vol. 9 No. 1, pp. 579-584.

Turner, C. and Oyekan, J. (2023), “Manufacturing in the age of human-centric and sustainable industry 5.0: Application to holonic, flexible, reconfigurable and smart manufacturing systems”, Sustainability, Vol. 15 No. 13, p. 10169.

UNI (2009), “Safety of machinery – human physical performance – part 4: evaluation of working postures and movements in relation to machinery”, UNI EN 1005–4.

Vosniakos, G.-C., Deville, J. and Matsas, E. (2017), “On immersive virtual environments for assessing human-driven assembly of large mechanical parts”, Procedia Manufacturing, Vol. 11, pp. 1263-1270.

Whitman, L., Jorgensen, M., Hathiyari, K. and Malzahn, D. (2004), “Virtual reality: its usefulness for ergonomic analysis”, Proceedings of the 2004 Winter Simulation Conference, Washington, DC, pp. 1740-1745.

Zuehlke, D. (2010), “SmartFactory—towards a factory-of-things”, Annual Reviews in Control, Vol. 34 No. 1, pp. 129-138.

Acknowledgements

This work is part of the two-year project IM.PR.ES.S.E.D. (IMmersive PRocESs ergonomicS by wEarable Devices) coordinated by Stellantis (FCA Italy) in collaboration with Politecnico di Torino, Università della Campania Luigi Vanvitelli, CIM4.0 (Competence Industry Manufacturing 4.0) of Torino for technical and development consultancy and Mare Digital for immersive reality tools.

The authors would like to thank the many graduate students who contributed to the project. In particular, they wish to acknowledge Stefania Tagliaferro for her support in the implementation and validation of the posture prediction module.

Corresponding author

Maria Pia Cavatorta can be contacted at: maria.cavatorta@polito.it
