Russell Cropanzano, Marion Fortin and Jessica F. Kirk
Abstract
Justice rules are standards that serve as criteria for formulating fairness judgments. Though justice rules play a role in the organizational justice literature, they have seldom been the subject of analysis in their own right. To address this limitation, we first consider three meta-theoretical dualities that are highlighted by justice rules – the distinctions between justice versus fairness, indirect versus direct measurement, and normative versus descriptive paradigms. Second, we review existing justice rules and organize them into four types of justice: distributive (e.g., equity, equality), procedural (e.g., voice, consistent treatment), interpersonal (e.g., politeness, respectfulness), and informational (e.g., candor, timeliness). We also emphasize emergent rules that have not received sufficient research attention. Third, we consider various computational models that purport to explain how justice rules are assessed and aggregated to form fairness judgments. Fourth and last, we conclude by reviewing research that enriches our understanding of justice rules by showing how they are cognitively processed. We observe that there are a number of influences on fairness judgments, and situations exist in which individuals do not systematically consider justice rules.
Christopher M. Castille and Larry J. Williams
Abstract
In this chapter, the authors critically examine the application of unmeasured latent method factors (ULMFs) in human resource and organizational behavior (HROB) research, focusing on addressing common method variance (CMV). The authors explore the development and usage of ULMFs to mitigate CMV and highlight key debates concerning measurement error in the HROB literature. The authors also discuss the implications of biased effect sizes and how such bias can lead HR professionals to oversell interventions. The authors provide evidence supporting the effectiveness of ULMFs when a specific assumption holds: that a single latent method factor contributes to the data. However, the authors dispute this assumption, noting that CMV is likely multidimensional; that is, it is complex and difficult to fix with statistical methods alone. Importantly, the authors highlight the significance of maintaining a multidimensional view of CMV, challenging the simplification of CMV as arising from a single source. The authors close by offering recommendations for using ULMFs in practice, as well as calls for more research into more complex forms of CMV.
Hendrik Slabbinck and Adriaan Spruyt
Abstract
The idea that a significant portion of what consumers do, feel, and think is driven by automatic (or “implicit”) cognitive processes has sparked a wave of interest in the development of assessment tools that (attempt to) capture cognitive processes under automaticity conditions (also known as “implicit measures”). However, as more and more implicit measures are developed, it is becoming increasingly difficult for consumer scientists and marketing professionals to select the most appropriate tool for a specific research question. We therefore present a systematic overview of the criteria that can be used to evaluate and compare different implicit measures, including their structural characteristics, the extent to which (and the way in which) they qualify as “implicit,” as well as more practical considerations such as ease of implementation and the user experience of the respondents. As an example, we apply these criteria to four implicit measures that are (or have the potential to become) popular in marketing research (i.e., the implicit association test, the evaluative priming task, the affect misattribution procedure, and the propositional evaluation paradigm).
Adam J. Vanhove, Tiffany Brutus and Kristin A. Sowden
Abstract
In recent years, a wide range of psychosocial health interventions have been implemented among military service members and their families. However, questions remain about the rigor with which these interventions have been evaluated. We conducted a systematic review of this literature, rating each relevant study (k = 111) on five evaluative rigor scales (type of control group, approach to participant assignment, outcome quality, number of measurement time points, and follow-up distality). The most frequently coded values on three of the five scales (control group type, participant assignment, and follow-up distality) were those indicating the lowest level of operationally defined rigor. Logistic regression results indicate that the evaluative rigor of intervention studies has largely remained consistent over time, with the exceptions indicating that rigor has decreased. Analyses among seven military sub-populations indicate that interventions conducted among soldiers completing basic training, soldiers returning from combat deployment, and combat veterans have had, on average, the greatest evaluative rigor. However, variability in mean scores across evaluative rigor scales within sub-populations highlights the unique methodological hurdles common to different military settings. Recommendations for better standardizing the intervention evaluation process are discussed.