Experiment

Design

The experimental design is divided into two sequential stages, as illustrated in Fig. 1.

Fig. 1 Experimental design


The first stage - called the "individual setting" - runs for one round. In this stage, participants perform the task individually (without any role assignment), and there are no groups. This stage allows us to measure subjects' intrinsic propensity to choose a specific reporting method. All participants are asked to roll a fair six-sided die, the outcome of which determines their earnings. Specifically, they are paid the equivalent (in Euros) of the reported die-rolling outcome (e.g., if the reported outcome is six, they receive six Euros). Before rolling the die, each participant is asked to choose how to report the result: via either a computer draw ("automatic reporting") or a self-reported die roll ("self-reporting"). Subjects who choose automatic reporting roll a virtual die, and the computer automatically reports the result on their screen, making any manipulation of the die-rolling outcome impossible. Subjects who choose self-reporting roll a physical die privately and enter the result on their screen themselves. Here, cheating is possible, profitable (since one's profit increases with the reported die-rolling outcome), and undetectable (since the die-rolling outcomes are unobservable by the experimenter and other subjects, except statistically). Indeed, although we cannot observe the true die-rolling outcome at the individual level, we can statistically detect the degree of misreporting by comparing the observed frequency of each realization with its theoretical frequency under a uniform distribution.
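The statistical detection described above can be sketched as a chi-square goodness-of-fit test against the uniform die distribution. This is a minimal illustration with made-up counts, not the study's data:

```python
# Hypothetical sketch: detecting misreporting statistically.
# With honest reporting, each outcome 1..6 occurs with probability 1/6.
# A chi-square goodness-of-fit test compares observed counts against
# the uniform benchmark.

def chi_square_uniform(counts):
    """Chi-square statistic for observed die counts vs a fair die."""
    n = sum(counts)
    expected = n / 6
    return sum((obs - expected) ** 2 / expected for obs in counts)

# Illustrative counts skewed toward high outcomes (possible over-reporting):
observed = [20, 25, 30, 35, 40, 90]   # 240 reports in total
stat = chi_square_uniform(observed)   # → 81.25

# Compare against the 5% critical value of a chi-square with 5 df (~11.07):
suspicious = stat > 11.07             # → True
```

A statistic this far above the critical value would reject the hypothesis that reports were drawn from a fair die; individual-level lies remain undetectable, exactly as in the design.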

In our setting, cheating increases one's individual or group payoff without generating negative externalities on anyone else aside from the experimenter. Hence, each group would be better off if all members fraudulently overstate earnings by self-reporting the highest die-rolling outcome, adhering to a norm of cooperation (group payoff maximization) at the expense of a norm of honesty.

In the second stage - called the "group setting" - we randomly assign the roles of leaders and workers to participants and cluster them in "experimental firms," i.e., four-member groups with one leader and three workers. Participants are informed that the matching process is not affected by anything that happened in the individual setting. As in the first stage, the task is a reporting choice followed by the corresponding die-rolling task, but now it is played in a group setting, albeit still privately.

Each group carries out the same task for ten rounds, wherein the roles and group compositions remain fixed. In each round, the leader acts as the first mover, and the workers act simultaneously as second movers after observing their leader's reporting choice. At the end of the experiment, the computer randomly selects one of the ten rounds to determine the individual payment in this stage.

The group payoff is computed as the sum of the four members' reported die-rolling outcomes. Individual payoffs are computed as a share of the group payoff, where that share depends on the treatment in place (described in the next paragraphs). To isolate the effect of the reporting choice from the reported die-rolling outcome, information about the reported outcomes remains private throughout the entire experiment. Importantly, unlike D'Adda et al., each worker receives no information between rounds about the other workers' decisions, their own payoff, or the group payoff. This information is only disclosed at the end of the study. This means that during the experiment, workers cannot observe how their leader punished or rewarded their reporting choices. We deliberately chose this design because we are primarily focused on how - and which type of - leaders use the incentive power. We are not interested in the effect of leaders' actual punishment on workers' behavior, as this aspect has already been explored in prior contributions. More interestingly, our design allows us to test whether the mere knowledge that a leader can punish or reward - without knowing whether s/he actually does so - shapes workers' behavior.

Note that the absence of feedback to workers between rounds does not nullify the purpose of having ten repeated rounds. Round repetition allows us to evaluate whether leaders' punishment changes with the size of the group payoff. Even if leaders and workers stick to the same reporting choices across time, the group payoff may vary between rounds, since it depends on chance (for those choosing automatic reporting) and the size of the lie (for those choosing self-reporting).

We use a 3 × 2 between-subject design, where we vary the leaders' ability to (i) choose the reporting method (mandatorily assigned vs voluntarily chosen), and (ii) set the share of the group payoff awarded to each worker ("incentive power"). Table 1 summarizes the design.

Table 1 Treatment conditions

Columns vary the reporting choice (Voluntary, Mandatory self-reporting, Mandatory automatic reporting); rows vary the leaders' incentive power (Yes, No).

With incentive power (Yes):
  - Voluntary: leaders can freely choose how to report the die roll (automatic or self-reporting); they can freely decide how to split the group payoff.
  - Mandatory self-reporting: leaders can only use self-reporting; they can freely decide how to split the group payoff.
  - Mandatory automatic reporting: leaders can only use automatic reporting; they can freely decide how to split the group payoff.

Without incentive power (No):
  - Voluntary: leaders can freely choose how to report the die roll (automatic or self-reporting), but they have to split the group payoff equally.
  - Mandatory self-reporting: leaders can only use self-reporting, and they have to split the group payoff equally.
  - Mandatory automatic reporting: leaders can only use automatic reporting, and they have to split the group payoff equally.

In the voluntary reporting treatment, leaders can choose the reporting method (automatic or self-reporting), whereas in the two mandatory reporting treatments they are exogenously assigned to a reporting method (either automatic or self-reporting). Workers are informed about whether their leaders can or cannot choose the reporting method.

In the treatments without leaders' incentive power, the group payoff is equally shared among the four group members, i.e., each player receives ¼. In the treatments with leaders' incentive power, leaders receive ¼ of the group payoff and can freely allocate (equally or not) the remaining ¾ of the group payoff among the workers. Importantly, the leader is free to choose any allocation summing to anything between 0% (i.e., allocating zero to every worker) and 100% of the remaining ¾. This choice gives leaders the possibility to give each worker not only a reward - as in D'Adda et al. - but also a punishment (i.e., a share below 33% of the workers' total payoff). In our design, any undistributed part is "wasted" (i.e., it returns to the experimenter). Accordingly, leaders cannot keep any part of the remaining group payoff for themselves.

To sum up, each of the ten rounds comprises the following subsequent steps:

  1. The leader is assigned to (if mandatory reporting treatment) or has to choose (if voluntary reporting treatment) the reporting method: automatic or self-reporting;
  2. The workers are informed about whether their leader's reporting method was mandatorily assigned or voluntarily chosen, and which reporting method has been assigned or chosen by the leader;
  3. The workers choose their reporting method, which remains hidden from the other group members except the leader;
  4. Both the leader and workers roll their own die, whose outcome remains private;
  5. The leader is informed about his/her own workers' reporting choices made in step 3, and the group payoff (the sum of the group members' reported die-rolling outcomes);
  6. Individual payoffs are computed: in the treatments without leaders' incentive power, the group payoff is equally shared among the group members, i.e., each receives ¼; in the treatments with leaders' incentive power, the leader receives ¼ of the group payoff and can freely allocate (equally or not) the remaining ¾ of the group payoff among workers.
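The payoff rule in steps 5-6 can be sketched as follows. This is a hypothetical illustration, not the experimental software; worker shares are expressed as fractions of the ¾ pool:

```python
def round_payoffs(reports, worker_shares=None):
    """Payoffs for one round: leader first, then the three workers.

    `reports` holds the four reported die outcomes (leader first).
    `worker_shares` are the leader's allocations, expressed as
    fractions of the remaining 3/4 pool; None corresponds to the
    equal-split (no incentive power) treatments. Any undistributed
    part of the pool is wasted (returns to the experimenter).
    """
    group_payoff = sum(reports)
    if worker_shares is None:                  # no incentive power
        return [group_payoff / 4] * 4
    assert 0 <= sum(worker_shares) <= 1 + 1e-9
    pool = 0.75 * group_payoff                 # the workers' 3/4 pool
    return [0.25 * group_payoff] + [s * pool for s in worker_shares]

# Example: everyone reports a six and the leader splits the pool equally,
# so each member earns 6 Euros.
payoffs = round_payoffs([6, 6, 6, 6], [1/3, 1/3, 1/3])
```

Note that giving a worker less than a third of the pool implements a punishment relative to the equal split, while the leader's own quarter is fixed in every treatment.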

After Stage 2, following Krupka and Weber and D'Adda et al., we elicit subjects' perceptions of how appropriate it is to choose a reporting method not aligned with the leader's, and to inflate the outcome of the die roll. Using Krupka and Weber's procedure, we describe a set of hypothetical reporting choices a subject might have made in the experiment and ask participants to evaluate the social appropriateness of each action on a 4-point scale with the following values: "Very Socially Unacceptable," "Somewhat Socially Unacceptable," "Somewhat Socially Acceptable," "Very Socially Acceptable." We incentivize answers by paying an extra €0.50 per question if the answer matches the one provided by another randomly selected participant in the same session. This matching technique directly follows Krupka and Weber, D'Adda et al., and others. It is meant to give participants an incentive to think in terms of the socially recognized perception of the appropriateness of the described action, rather than their own personal perception (i.e., social rather than personal norms). After those incentivized questions, following Gibson et al. and D'Adda et al., we collect participants' opinions about misreporting behaviors and truthfulness in private organizations, along with individual sociodemographic measures. See Appendix B in the supplementary material for details.


Procedures

The experiment was conducted in April and May 2018 at the Laboratory for Research in Experimental and Behavioral Economics (LINEEX) of the University of Valencia, Spain. In total, we recruited 240 students, with 40 subjects (10 firms) per treatment. Participants were aged 21 years on average, and 37% were women. More than half of them were students in the social sciences (economics and other subjects). As anticipated, we are aware that using a lab experiment with a sample of students is suboptimal relative to employing real business leaders in a field experiment. However, the existing literature has generally supported the external validity of lab experiments. Moreover, our approach is similar to existing studies on leadership, like D'Adda et al. and Brandts et al., which have used student samples to derive meaningful insights for real organizations.

The experiment was computerized using the software z-Tree. Participants performed all of the experimental tasks via computer, except the die-rolling task in the self-reporting condition. In this case, participants had to roll a physical die placed near their computer. To ensure anonymity, participants were informed that their decisions during the experiment - as well as their final payment - would be linked to a client ID number but their identity would remain confidential. To further ensure confidentiality, payments were issued in cash at the end of the session to one participant at a time. Each session lasted approximately one hour, and participants earned on average €15, including the show-up fee of €5. An English translation of the instructions provided to the participants is available in the supplementary material.

At the end of the instructions for the second stage of the experiment, and before starting that stage, subjects were asked a set of computerized questions to check their understanding of the game. They were provided with prompt feedback via computer and asked to raise their hand when they gave an incorrect answer. In this case, the lab assistant approached the student who raised their hand to explain the mistake and the correct answer. No major issues were encountered. The detailed deliverable with the results of the comprehension questions, and any other questions raised in each experimental session, is available in the supplementary material.


Summary Statistics and Methods

Table 2 reports summary statistics of reporting choices ("Automatic" and "Self-reporting") across the two settings of the experiment ("Group" and "Individual"). Specifically, Table 2a reports the frequency and percentage of the two reporting choices with observations pooled across roles. In the individual setting, we have a total of 240 observations. In the group setting, where subjects perform the task across ten rounds, we have a total of 2400 observations. Table 2b shows the frequency and percentage of the two reporting choices with observations partitioned by role - i.e., leaders in the voluntary treatments (200 observations) and workers in all treatments (1800 observations) - for a total of 2000 observations. Those statistics highlight a strong preference for self-reporting (which is chosen by approx. 80% of subjects in either the individual or group setting), especially among subjects assigned to the role of leaders (86.5% in the voluntary treatment).

Table 2 Summary statistics of reporting choices

(a) Pooled across roles

                        Group                 Individual
                   Freq     Pct. (%)     Freq     Pct. (%)
Automatic           552        23          48        20
Self-reporting     1848        77         192        80
Total              2400       100         240       100

(b) Partitioned by role

                 Leaders (voluntary treatment)    Workers (all treatments)          Total
                 Group          Individual        Group          Individual        Group          Individual
                 Freq  Pct.(%)  Freq  Pct.(%)     Freq  Pct.(%)  Freq  Pct.(%)     Freq  Pct.(%)  Freq  Pct.(%)
Automatic          27   13.5      12   20          325   18.06     36   20          352   17.6      48   20
Self-reporting    173   86.5      48   80         1475   81.94    144   80         1648   82.4     192   80
Total             200  100        60  100         1800  100       180  100         2000  100       240  100
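As a sanity check, the percentages in Table 2a follow directly from the reported frequencies:

```python
# Consistency check of Table 2a: percentages implied by the frequencies.
group = {"Automatic": 552, "Self-reporting": 1848}        # 2400 observations
individual = {"Automatic": 48, "Self-reporting": 192}     # 240 observations

def pct(freq, total):
    """Frequency as a percentage of the total, rounded to 2 decimals."""
    return round(100 * freq / total, 2)

group_pct = {k: pct(v, 2400) for k, v in group.items()}           # 23% / 77%
individual_pct = {k: pct(v, 240) for k, v in individual.items()}  # 20% / 80%
```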


For the analysis, we employ regression models that include a set of control variables. According to previous research, two demographic characteristics may influence unethical behaviors, namely gender and age. Accordingly, we control for age (in years) and a gender dummy (1 = female, 0 = male). We also control for subjects' fields of study, which may influence misconduct. To save space, in our regression tables we refer to this set of variables as Individual controls, and mark their joint inclusion with a check symbol "✓." Moreover, our regressions include round fixed effects to account for differences in reporting behavior across the rounds of the experiment. Standard errors - reported in parentheses - are clustered by group and round.