| | Gino involved in data collection? | Co-authors have/had raw data? | Data for reproducing results available? |
|---|---|---|---|
| Experiment 1 | Yes | Never | Yes, but not posted |
| Experiment 2 | Yes | Never | Yes, but not posted |
Below I describe the process through which I have evaluated this paper. Before doing so, I wish to emphasize that I remain agnostic to the truth of the allegations against Francesca Gino, and I have not seen sufficient public evidence of intentional manipulation or data fraud to warrant suspicion in any other paper on which Francesca Gino is a coauthor. I am hopeful that as more information is revealed from the secret HBS investigation, more definitive conclusions can be reached. Nevertheless, I am evaluating my papers carefully now for transparency reasons, to the best of my abilities and available data.
This paper includes two similar studies measuring how randomly allocated endowments change how people evaluate cheating. Both studies were conducted by Francesca Gino using pencil-and-paper data collection and analyzed separately by both authors from an Excel file created by RAs (to the best of my memory). I have all the original emails and the Excel file as sent to me in June 2008. I also have the original Stata do-file and log-file that I used for the analysis. We do not have the paper-and-pencil sheets, which were not kept beyond several years, per standard practice and, I believe, IRB protocol requirements. Thus, I infer nothing from the non-existence of those physical data.
I note that these studies are incentive-compatible and use strong manipulations of $20 (~$29 today) endowments with real visible cash as well as strong financial incentives for cheating.
I have nearly finished constructing an extensive replication of the main results using the Excel data as sent to me. Everything replicates well, with a few minor reporting errors. I also ran extensive forensics on the data to search for anything that might indicate falsified data. This analysis was independently repeated by another scholar with forensics experience similar to mine. We independently found no evidence of data manipulation or fabrication. The forthcoming replication packet will also detail any errors, as well as any approaches that were normative in 2008 but are no longer considered best practices.
BELOW WAS EDITED ON 11/7/23
I originally decided not to publicly post the data or replication packet because of concerns that evaluations would not be well adjudicated in the current public sphere. As I noted, my intention was for scholars wishing to view and evaluate this replication packet to contact me directly for it, but many people were not satisfied with this process, and I now understand why. Given that, I will post the data and replication packet as soon as possible on OSF, and I will update this page once that package is available. Putting this together with 15-year-old data, email chains, and notes has been time-consuming. I am fortunate to even be able to do so.
I will also be supportive of anyone wishing to conduct a thoughtful and careful replication of the studies, as I would for any other study.